Howell: TrNN controls need machine consciousness too?
TrNN controls clearly need human consciousness; do they need machine consciousness as well?
Table of Contents
from the section "Grossberg's c-ART, Transformer NNs, and consciousness?":
3. Are current (semi-manual) "controls" of Large Language Models (LLMs) going in the direction of machine consciousness, without those involved being aware of it? Will "controls" ultimately require machine consciousness as one of their components, in particular for [learning, evolution] in a stable and robust manner?
Here I am focusing mostly on consciousness at a simple level, without dragging in machine [ethics, morality], which would presumably help as well; for now I want to limit my focus.
Current [corrections, "controls"] of early-generation LLMs require significant human guidance to evolve more effective systems that avoid some of the problems seen in the early generations:
- simple errors of logic. Fair enough, although my feeling is that too much is being asked at a very early stage of a system that should sometimes be seen as a Cognitive [user, operating system, applications programming] interface, one that ties in well with systems handling specialized requirements outside of linguistics and libraries.
- hallucinations. This is funny, because I certainly wouldn't call some of the things I see a "hallucination" if these were humans. It goes far beyond that, but maybe it's not much different in mainstream [media, academia, policy, etc]. We are all human; we all make mistakes.
- Arguments over whether LLMs possess Artificial General Intelligence (AGI) occasionally appear in articles and blogs.
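To make the idea of "semi-manual controls" concrete, here is a minimal sketch of one such mechanism: a human-authored rule list that flags an LLM response for review before release. Everything here is an illustrative assumption: the rule patterns, the flag labels, and the function names are hypothetical, not any particular vendor's API. The point is only that the human stays in the loop, which is exactly the bottleneck the question above is about.

```python
# Hypothetical "semi-manual control" layer for LLM output:
# humans curate the rules; the machine only applies them and
# flags suspect responses for human review.
import re
from dataclasses import dataclass, field

# Human-curated rules (illustrative): pattern -> flag label.
RULES = {
    r"\b(certainly|definitely|guaranteed)\b": "overconfident claim",
    r"\b\d{4}\b.*\b(invented|discovered)\b": "unverified historical fact",
}

@dataclass
class ControlResult:
    text: str
    flags: list = field(default_factory=list)

def apply_controls(response: str) -> ControlResult:
    """Flag rule violations; a human reviewer resolves the flags."""
    flags = [label for pattern, label in RULES.items()
             if re.search(pattern, response, re.IGNORECASE)]
    return ControlResult(text=response, flags=flags)
```

Note that nothing here learns: every improvement to the rule list is manual, which is why such controls do not scale to real-time [autonomous learning, evolution].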
real-time [autonomous learning, evolution] of LLMs
???
???